
    Towards robust context-sensitive sentence alignment for monolingual corpora

    Aligning sentences in comparable monolingual corpora has been suggested as a first step towards training text rewriting algorithms for tasks such as summarization or paraphrasing. We present a new monolingual sentence alignment algorithm that combines a sentence-based TF*IDF score, turned into a probability distribution using logistic regression, with a global alignment dynamic programming algorithm. Our approach provides a simpler and more robust solution, achieving a substantial improvement in accuracy over existing systems.
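
    A minimal sketch of the pipeline this abstract describes, in plain Python: sentences are compared with a TF*IDF cosine score, a logistic function squashes that score into an alignment probability, and a Needleman-Wunsch-style dynamic program finds the best global alignment. The logistic weights `w` and `b` and the gap score are illustrative placeholders, not the paper's fitted values.

```python
import math
from collections import Counter

def tfidf_vectors(sentences):
    """TF*IDF vectors for tokenized sentences, each sentence treated as a document."""
    n = len(sentences)
    df = Counter(w for s in sentences for w in set(s))
    idf = {w: math.log(n / df[w]) for w in df}
    return [{w: tf * idf[w] for w, tf in Counter(s).items()} for s in sentences]

def cosine(u, v):
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def align(src, tgt, w=8.0, b=-4.0, gap=-0.5):
    """Globally align two lists of tokenized sentences; returns (i, j) index pairs."""
    vecs = tfidf_vectors(src + tgt)          # shared IDF over both corpora
    sv, tv = vecs[:len(src)], vecs[len(src):]

    def p(i, j):
        # Logistic regression turns the raw TF*IDF cosine into a probability;
        # w and b stand in for weights fit on hand-aligned data.
        return 1.0 / (1.0 + math.exp(-(w * cosine(sv[i], tv[j]) + b)))

    m, n = len(src), len(tgt)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0 and j == 0:
                continue
            cands = []
            if i and j:
                cands.append(dp[i - 1][j - 1] + math.log(p(i - 1, j - 1)))
            if i:
                cands.append(dp[i - 1][j] + gap)   # source sentence left unaligned
            if j:
                cands.append(dp[i][j - 1] + gap)   # target sentence left unaligned
            dp[i][j] = max(cands)

    # Trace back through the table to recover the aligned pairs.
    pairs, i, j = [], m, n
    while i or j:
        if i and j and math.isclose(dp[i][j], dp[i - 1][j - 1] + math.log(p(i - 1, j - 1))):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif i and math.isclose(dp[i][j], dp[i - 1][j] + gap):
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

    Calling `align([s.split() for s in src], [s.split() for s in tgt])` returns the matched sentence index pairs; sentences consumed by the gap moves are the ones the model treats as unalignable.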

    Arabic diacritization using weighted finite-state transducers

    Arabic is usually written without short vowels and other diacritics, which are nevertheless important for several applications. We present a novel algorithm for restoring these symbols, using a cascade of probabilistic finite-state transducers trained on the Arabic treebank, integrating a word-based language model, a letter-based language model, and an extremely simple morphological model. This combination of probabilistic methods and simple linguistic information yields high levels of accuracy.
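
    A hedged sketch of the combination idea only: rather than a real weighted finite-state cascade, this plain-Python stand-in restores diacritics by scoring each candidate form seen in training with a log-linear mix of a word-based model and a letter-bigram model, loosely mirroring the word LM / letter LM decomposition the abstract names. The interpolation weight `alpha` and the add-one smoothing are assumptions, not the paper's setup.

```python
import math
from collections import Counter, defaultdict

# Arabic short vowels and related marks: fathatan through sukun (U+064B-U+0652).
DIACRITICS = set("\u064b\u064c\u064d\u064e\u064f\u0650\u0651\u0652")

def strip_diacritics(word):
    return "".join(c for c in word if c not in DIACRITICS)

class Diacritizer:
    """Restores diacritics with a word model plus a letter-bigram fallback."""

    def __init__(self, diacritized_corpus, alpha=0.7):
        self.alpha = alpha                        # word-vs-letter weight (assumed)
        self.word_counts = Counter(diacritized_corpus)
        self.total = sum(self.word_counts.values())
        self.candidates = defaultdict(set)        # undiacritized form -> seen forms
        self.unigrams, self.bigrams = Counter(), Counter()
        for w in diacritized_corpus:
            self.candidates[strip_diacritics(w)].add(w)
            padded = "^" + w + "$"
            self.unigrams.update(padded[:-1])
            self.bigrams.update(zip(padded, padded[1:]))

    def letter_logp(self, word):
        """Add-one smoothed letter-bigram log-probability of a diacritized form."""
        padded = "^" + word + "$"
        return sum(math.log((self.bigrams[(a, b)] + 1) /
                            (self.unigrams[a] + len(self.unigrams) + 1))
                   for a, b in zip(padded, padded[1:]))

    def restore(self, word):
        """Pick the best diacritized form for an undiacritized word."""
        cands = self.candidates.get(word, {word})
        def score(c):
            word_lp = math.log((self.word_counts[c] + 1) / (self.total + 1))
            return self.alpha * word_lp + (1 - self.alpha) * self.letter_logp(c)
        return max(cands, key=score)
```

    On a toy corpus such as `Diacritizer(["كَتَبَ", "كَتَبَ", "كُتُبٌ"])`, `restore("كتب")` prefers the more frequent form كَتَبَ; unseen words fall back to the letter model alone.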

    Human-centered compression for efficient text input

    Traditional methods for efficient text entry are based on prediction. Prediction requires constant context-switching between entering text and selecting or verifying the predictions. Previous research has shown that the advantages offered by prediction are usually eliminated by the cognitive load associated with such context-switching. We present a novel approach that relies on compression instead: users compress text using a very simple abbreviation technique that yields an average keystroke reduction of 26.4%. The input text is automatically decoded using weighted finite-state transducers that incorporate both word-based and letter-based n-gram language models, yielding a residual error rate of 3.3%. User experiments show that this approach yields improved text input speeds.
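
    A plain-Python sketch of the noisy-channel idea (the system itself decodes with weighted finite-state transducers): an assumed abbreviation rule drops non-initial vowels, and a word-bigram Viterbi decoder expands the abbreviated tokens back into words. The rule, the smoothing, and the bigram-only model are all illustrative simplifications, not the paper's scheme.

```python
import math
from collections import Counter, defaultdict

VOWELS = set("aeiou")

def abbreviate(word):
    """Illustrative compression rule (assumed): keep the first letter, drop later vowels."""
    return word[0] + "".join(c for c in word[1:] if c not in VOWELS)

class Decoder:
    """Expands abbreviated tokens with a word-bigram model and Viterbi search."""

    def __init__(self, corpus_tokens):
        self.inv = defaultdict(set)               # abbreviation -> candidate words
        for w in set(corpus_tokens):
            self.inv[abbreviate(w)].add(w)
        self.bi = Counter(zip(corpus_tokens, corpus_tokens[1:]))
        self.uni = Counter(corpus_tokens)
        self.v = len(self.uni) + 1

    def logp(self, prev, word):
        # Add-one smoothed bigram probability P(word | prev).
        return math.log((self.bi[(prev, word)] + 1) / (self.uni[prev] + self.v))

    def decode(self, abbrevs):
        """Viterbi search; beams maps the last decoded word to (logprob, path)."""
        beams = {"<s>": (0.0, [])}
        for a in abbrevs:
            cands = self.inv.get(a, {a})          # unseen token: pass it through
            new = {}
            for w in cands:
                prev, (lp, path) = max(
                    beams.items(),
                    key=lambda kv: kv[1][0] + self.logp(kv[0], w))
                new[w] = (lp + self.logp(prev, w), path + [w])
            beams = new
        return max(beams.values(), key=lambda v: v[0])[1]
```

    For example:

```python
corpus = "the quick brown fox jumps over the lazy dog".split()
dec = Decoder(corpus)
print([abbreviate(w) for w in ["the", "quick", "brown"]])  # ['th', 'qck', 'brwn']
print(dec.decode(["th", "qck", "brwn", "fx"]))             # ['the', 'quick', 'brown', 'fox']
```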